73 research outputs found

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
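    A minimal sketch of the failure-biasing idea behind importance sampling for rare failure events, on an assumed toy model of two identical components (failure rate LAM, repair rate MU); it is an illustration, not any specific scheme from the paper. The quantity of interest is the probability that the system fails before returning to the all-up state within one regenerative cycle; standard simulation rarely observes the failure branch, while failure biasing samples it with probability 0.5 and corrects with a likelihood ratio.

```python
import random

# Assumed toy model: two identical components, failure rate LAM, repair rate MU.
# gamma = P(both components down before the system returns to the all-up state).
LAM, MU = 1e-4, 1.0
P_FAIL = LAM / (LAM + MU)          # true probability of the failure branch

def cycle(bias=None):
    """One regenerative cycle; returns the likelihood-ratio-weighted failure indicator."""
    p = P_FAIL if bias is None else bias
    if random.random() < p:        # second component fails -> system failure
        return P_FAIL / p          # weight = true probability / sampling probability
    return 0.0                     # repair happens first -> no failure this cycle

def estimate(n, bias=None):
    return sum(cycle(bias) for _ in range(n)) / n

random.seed(0)
print("exact gamma        :", P_FAIL)
print("standard simulation:", estimate(100_000))           # very few failure observations
print("failure biasing    :", estimate(100_000, bias=0.5)) # failures observed ~half the time
```

    With the biased estimator, roughly half the cycles contribute a nonzero, correctly weighted observation, which is what drives the run-length reduction described above.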

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature-search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) articles. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for downloading the annotation data and for the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
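    As an illustration of one of the baselines named above, the sketch below ranks a few invented candidate abstracts against a seed text by TF-IDF cosine similarity; the texts and setup are placeholders, not RELISH data or the consortium's evaluation code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts (not from RELISH): one seed article and three candidates.
seed = "importance sampling for rare event simulation of dependable systems"
candidates = [
    "fast simulation of highly reliable Markovian systems via failure biasing",
    "deep learning for protein structure prediction",
    "variance reduction techniques in stochastic simulation",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([seed] + candidates)     # row 0 is the seed
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()  # seed vs. each candidate

for text, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {text}")
```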

    Two-Stage Stopping Procedures Based on Standardized Time Series

    We propose some new two-stage stopping procedures to construct absolute-width and relative-width confidence intervals for a simulation estimator of the steady-state mean of a stochastic process. The procedures are based on the method of standardized time series proposed by Schruben and on Stein's two-stage sampling scheme. We prove that our two-stage procedures give rise to asymptotically valid confidence intervals (as the prescribed length of the confidence interval approaches zero and the size of the first stage grows to infinity). The sole assumption required is that the stochastic process satisfy a functional central limit theorem.
    Keywords: simulation, output analysis, stopping rules, diffusion approximations
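    A simplified sketch of a Stein-type two-stage procedure for an absolute-width confidence interval. It estimates the variance from independent first-stage replications rather than from the standardized-time-series estimator used in the paper, and the AR(1) process is an assumed stand-in for a steady-state simulation.

```python
import math
import random
from statistics import mean, stdev

from scipy import stats

def ar1_mean(n, phi=0.8):
    """One replication: average of n steps of an AR(1) process (true mean 0)."""
    x, total = 0.0, 0.0
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        total += x
    return total / n

def two_stage(halfwidth, n0=20, run_length=1_000, alpha=0.05):
    # Stage 1: pilot replications to estimate the variance of the point estimator.
    stage1 = [ar1_mean(run_length) for _ in range(n0)]
    s = stdev(stage1)
    t = stats.t.ppf(1 - alpha / 2, n0 - 1)
    # Total replications so that t * s / sqrt(N) is at most the target half-width.
    n_total = max(n0, math.ceil((t * s / halfwidth) ** 2))
    # Stage 2: the remaining replications, then the final interval.
    stage2 = [ar1_mean(run_length) for _ in range(n_total - n0)]
    data = stage1 + stage2
    return mean(data), t * s / math.sqrt(n_total)

random.seed(1)
est, hw = two_stage(halfwidth=0.05)
print(f"estimate {est:.4f} +/- {hw:.4f}")
```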

    On Derivative Estimation of the Mean Time to Failure in Simulations of Highly Reliable Markovian Systems

    The mean time to failure (MTTF) of a Markovian system can be expressed as a ratio of two expectations. For highly reliable Markovian systems, the resulting ratio formula consists of one expectation that cannot be estimated with bounded relative error when using standard simulation, while the other, which we call a non-rare expectation, can be estimated with bounded relative error. We show that some derivatives of the non-rare expectation cannot be estimated with bounded relative error when using standard simulation, which in turn may lead to an estimator of the derivative of the MTTF that has unbounded relative error. However, if particular importance-sampling methods (e.g., balanced failure biasing) are used, then the estimator of the derivative of the non-rare expectation will have bounded relative error, which (under certain conditions) will yield an estimator of the derivative of the MTTF with bounded relative error.
    Subject classifications: Probability, stochastic model applications.
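    A minimal sketch of the ratio formula MTTF = E[T] / E[I] on an assumed two-component model (failure rate LAM, repair rate MU): T is the length of a regenerative cycle up to the first failure or return to the all-up state, and I indicates a failure. Following the idea above, the rare denominator is estimated with failure biasing while the non-rare numerator is sampled under the original measure; the model and constants are illustrative assumptions, not the paper's general setting.

```python
import random

# Assumed toy model, not the paper's general Markovian system.
LAM, MU = 1e-4, 1.0
P_FAIL = LAM / (LAM + MU)          # probability of failure within a cycle

def cycle_time():
    # Cycle length under the original measure: holding time in the
    # all-up state plus holding time in the one-component-down state.
    return random.expovariate(2 * LAM) + random.expovariate(LAM + MU)

def failure_indicator(bias=0.5):
    # Likelihood-ratio-weighted failure indicator under failure biasing.
    return P_FAIL / bias if random.random() < bias else 0.0

random.seed(2)
n = 100_000
numerator = sum(cycle_time() for _ in range(n)) / n
denominator = sum(failure_indicator() for _ in range(n)) / n
exact = (1 / (2 * LAM) + 1 / (LAM + MU)) / P_FAIL
print(f"estimated MTTF {numerator / denominator:.3e}   exact {exact:.3e}")
```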

    Asymptotics of Likelihood Ratio Derivative Estimators in Simulations of Highly Reliable Markovian Systems

    We discuss the estimation of derivatives of a performance measure using the likelihood ratio method in simulations of highly reliable Markovian systems. We compare the difficulties of estimating the performance measure and of estimating its partial derivatives with respect to component failure rates as the component failure rates tend to 0 and the component repair rates remain fixed. We first consider the case when the quantities are estimated using naive simulation; i.e., when no variance reduction technique is used. In particular, we prove that in the limit, some of the partial derivatives can be estimated as accurately as the performance measure itself. This result is of particular interest in light of the somewhat pessimistic empirical results others have obtained when applying the likelihood ratio method to other types of systems. However, the result only holds for certain partial derivatives of the performance measure when using naive simulation. More specifically, we can estimate a certain partial derivative with the same relative accuracy as the performance measure if the partial derivative is associated with a component either having one of the largest failure rates or whose failure can trigger a failure transition on one of the "most likely paths to failure." Also, we develop a simple criterion to determine which partial derivatives will satisfy either of these properties. In particular, we can identify these derivatives using a sensitivity measure which can be calculated for each type of component. We also examine the limiting behavior of the estimates of the performance measure and its derivatives which are obtained when an importance sampling scheme known as balanced failure biasing is used. In particular, we show that the estimates of all derivatives can be improved. In contrast to the situation that arose when using naive simulation, we prove that in the limit, all derivatives can be estimated as accurately as the performance measure when balanced failure biasing is employed. Finally, we formalize the notion of a "most likely path to failure" in the setting of highly reliable Markovian systems. We accomplish this by proving a conditional limit theorem for the distribution of the sample paths leading to a system failure, given that a system failure occurs before the system returns to the state with all components operational. We use this result to establish our other results.
    Keywords: simulation, gradient estimation, likelihood ratios, highly reliable systems, importance sampling
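    The sketch below illustrates the likelihood ratio (score function) derivative estimator under failure biasing on the same assumed two-component model: it estimates d(gamma)/d(LAM), where gamma = LAM / (LAM + MU) is the probability of a failure during a regenerative cycle. With a single failing component type, balanced failure biasing reduces to simply giving the failure branch a fixed sampling probability; this is an illustrative special case, not the paper's general setting.

```python
import random

# Assumed toy model: failure rate LAM, repair rate MU, biased failure probability BIAS.
LAM, MU, BIAS = 1e-4, 1.0, 0.5
P_FAIL = LAM / (LAM + MU)               # true failure-branch probability
SCORE = 1 / LAM - 1 / (LAM + MU)        # d/dLAM of log P(failure branch)

def lr_derivative_sample():
    if random.random() < BIAS:          # failure branch sampled under biasing
        lr = P_FAIL / BIAS              # likelihood ratio of this path
        return lr * SCORE               # indicator * likelihood ratio * score
    return 0.0                          # no failure -> indicator is zero

random.seed(3)
n = 100_000
est = sum(lr_derivative_sample() for _ in range(n)) / n
exact = MU / (LAM + MU) ** 2            # d/dLAM of LAM / (LAM + MU)
print(f"estimated d(gamma)/d(LAM): {est:.4f}   exact: {exact:.4f}")
```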